Results 1 - 10 of 10
1.
J Supercomput ; : 1-32, 2022 Aug 26.
Article in English | MEDLINE | ID: covidwho-2235264

ABSTRACT

Bidirectional generative adversarial networks (BiGANs) and cycle generative adversarial networks (CycleGANs) are two emerging machine learning models that, up to now, have been used as generative models, i.e., to generate output data sampled from a target probability distribution. However, these models are also equipped with encoding modules which, after weakly supervised training, could in principle be exploited for the extraction of hidden features from the input data. How these extracted features could be effectively exploited for classification tasks is still an unexplored field. Motivated by this consideration, in this paper we develop and numerically test the performance of a novel inference engine that relies on BiGAN- and CycleGAN-learned hidden features for distinguishing COVID-19 from other lung diseases in computed tomography (CT) scans. In this respect, the main contributions of the paper are twofold. First, we develop a kernel density estimation (KDE)-based inference method which, in the training phase, leverages the hidden features extracted by BiGANs and CycleGANs to estimate the (a priori unknown) probability density function (PDF) of the CT scans of COVID-19 patients and then, in the inference phase, uses it as a target COVID-PDF for the detection of COVID-19. As a second major contribution, we numerically evaluate and compare the classification accuracies of the implemented BiGAN and CycleGAN models against those of some state-of-the-art methods that rely on the unsupervised training of convolutional autoencoders (CAEs) for feature extraction. The performance comparisons are carried out over a spectrum of different training loss functions and distance metrics.
The classification accuracies of the proposed CycleGAN-based (resp., BiGAN-based) models outperform those of the considered benchmark CAE-based models by about 16% (resp., 14%).
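The KDE pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random arrays stand in for encoder-extracted features, and the 5%-quantile decision threshold is a hypothetical choice.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs: feature vectors a trained BiGAN/CycleGAN
# encoder would extract from COVID and non-COVID CT scans (synthetic data).
covid_features = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
other_features = rng.normal(loc=3.0, scale=1.0, size=(200, 16))

# Training phase: estimate the target COVID-PDF from COVID features only.
kde = KernelDensity(kernel="gaussian", bandwidth=0.8).fit(covid_features)

# Inference phase: score a new scan's features under the estimated PDF and
# compare against a log-density threshold (here, the 5% quantile of the
# training scores; in practice this would be tuned on validation data).
threshold = np.quantile(kde.score_samples(covid_features), 0.05)

def is_covid(feature_vector):
    # High density under the COVID-PDF -> classify as COVID-19.
    return kde.score_samples(feature_vector[None, :])[0] >= threshold

print(is_covid(other_features[0]))
```

Scans whose features fall in a high-density region of the estimated COVID-PDF are flagged; features from the other distribution score far lower and are rejected.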

2.
5th IEEE International Conference on Computer and Communication Engineering Technology, CCET 2022 ; : 115-119, 2022.
Article in English | Scopus | ID: covidwho-2136130

ABSTRACT

Computed Tomography (CT) is an authoritative verification standard for patients with Coronavirus Disease 2019 (COVID-19). Automatic detection of lung infection through CT is of great significance for epidemic prevention and control and for the prevention of cross-infection. The accuracy of existing lung CT image segmentation methods is not high, and, due to hospital privacy-protection measures, COVID-19 lung CT data sets are too small, which makes training prone to over-fitting. In this paper, we propose a qualitative mapping model for the diagnosis and localization of COVID-19 lesions. The binary image produced by a U-net network is used as input, the lung CT is segmented into four attributes, and attribute diagnosis is carried out with the help of a correlation matrix and a transformation degree function. Experiments show that this method not only avoids the over-fitting risk of small data sets but also increases robustness to the data. Experiments also show that this design achieves higher accuracy than plain neural network learning. © 2022 IEEE.

3.
Appl Soft Comput ; 125: 109111, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1944285

ABSTRACT

COVID-19 spreads rapidly, so diagnosing this disease accurately and in a timely manner is essential for quarantine and medical treatment. RT-PCR plays a crucial role in diagnosing COVID-19, whereas computed tomography (CT) delivers a faster result when combined with artificial-intelligence assistance. Developing a deep learning classification model for detecting COVID-19 from CT images is conducive to assisting doctors in consultation. We propose a feature complement fusion network (FCF) for detecting COVID-19 in lung CT scan images. This framework extracts both local and global features, by a CNN extractor and a ViT extractor respectively, so that each compensates for the limited receptive field of the other. Thanks to the attention mechanism in our designed feature complement Transformer (FCT), the extracted local and global feature embeddings achieve a better representation. We combined a supervised with a weakly supervised strategy to train our model, which encourages the CNN to guide the ViT to converge faster. Finally, we obtained 99.34% accuracy on our test set, which surpasses current state-of-the-art popular classification models. Moreover, the proposed structure can easily be extended to other classification tasks by substituting other suitable extractors.
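The local/global complement idea can be illustrated with a single-head cross-attention step. This is a simplified NumPy stand-in for the paper's FCT, not its actual architecture; the shapes and the concatenation at the end are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def complement_fusion(local_feats, global_feat):
    """Fuse patch-level (CNN-style) features with a global (ViT-style)
    embedding via single-head cross-attention, then concatenate."""
    # Attention logits: relevance of each local patch to the global view.
    scores = local_feats @ global_feat / np.sqrt(global_feat.shape[0])
    weights = softmax(scores)                  # distribution over patches
    attended = weights @ local_feats           # weighted sum of local features
    return np.concatenate([attended, global_feat])

rng = np.random.default_rng(1)
local = rng.normal(size=(49, 32))   # e.g. a 7x7 CNN feature map, 32 channels
global_ = rng.normal(size=32)       # a ViT [CLS]-style global embedding
fused = complement_fusion(local, global_)
print(fused.shape)  # (64,)
```

The fused vector carries both the globally attended local detail and the global context, which is the complementarity the abstract describes.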

4.
Medical Imaging 2022: Image Processing ; 12032, 2022.
Article in English | Scopus | ID: covidwho-1901886

ABSTRACT

Deep learning has shown successful performance not only in supervised disease detection but also in lesion localization under the weakly supervised learning framework in medical image processing. However, few works consider the semantic relationships among diseases and lesions, which play a critical role in actual clinical diagnosis. In this work, we propose a novel framework, Feature map Graph Representational Probabilistic Class Activation Map (FGR-PCAM), to learn the graph structure of lesion-specific features and account for these relationships while also leveraging the localization ability of PCAM. Considering the relations among localized lesion-specific features has been shown to enhance both thoracic disease classification and localization on the CheXpert and ChestX-ray14 datasets. Accurate classification and localization of chest X-ray images would also help fight COVID-19 and unveil COVID-19 fingerprints. © 2022 SPIE

5.
19th IEEE International Symposium on Biomedical Imaging, ISBI 2022 ; 2022-March, 2022.
Article in English | Scopus | ID: covidwho-1846118

ABSTRACT

To aid clinicians in diagnosing diseases and monitoring lesion conditions more efficiently, automated lesion segmentation is a convincing approach. As it is time-consuming and costly to obtain pixel-level annotations, weakly supervised learning has become a promising trend. Recent works based on Class Activation Mapping (CAM) achieve success on natural images, but they do not fully utilize the intensity property of medical images, so their performance may not be good enough. In this work, we propose a novel weakly supervised lesion segmentation framework with self-guidance by CT intensity clustering. The proposed method takes full advantage of the fact that CT intensity represents the density of materials, and partitions pixels into groups by intensity clustering. Clusters with a high lesion probability, as determined by the CAM, are selected to generate lesion masks. These lesion masks are used to derive self-guided loss functions that improve the CAM for better lesion segmentation. Our method achieves a Dice score of 0.5874 on the COVID-19 dataset and 0.4534 on the Liver Tumor Segmentation Challenge (LiTS) dataset. © 2022 IEEE.
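The clustering-then-selection step can be sketched in a few lines. This is a toy 1-D illustration under assumed data: the intensity values, the CAM scores, the two-cluster choice, and the 0.5 selection threshold are all stand-ins, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic stand-ins: Hounsfield-like CT intensities and a CAM heatmap
# in [0, 1] from a weakly supervised classifier (flattened to 1-D here).
ct = np.concatenate([rng.normal(-700, 50, 800),    # aerated lung tissue
                     rng.normal(-50, 30, 200)])    # denser, lesion-like tissue
cam = np.concatenate([rng.uniform(0.0, 0.3, 800),
                      rng.uniform(0.6, 1.0, 200)])

# Step 1: partition pixels into intensity clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    ct.reshape(-1, 1))

# Step 2: keep clusters whose mean CAM activation is high -> pseudo lesion mask.
mask = np.zeros_like(ct, dtype=bool)
for k in range(2):
    if cam[labels == k].mean() > 0.5:
        mask[labels == k] = True

# The pseudo mask would then drive a self-guided loss refining the CAM.
print(mask.sum())
```

Because the dense cluster coincides with high CAM activation, the pseudo mask recovers roughly the lesion-like pixels, which is the self-guidance signal the abstract describes.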

6.
18th IEEE International Symposium on Biomedical Imaging (ISBI) ; : 1966-1970, 2021.
Article in English | Web of Science | ID: covidwho-1822031

ABSTRACT

Despite tremendous efforts, it is very challenging to build a robust model to assist in the accurate quantitative assessment of COVID-19 on chest CT images. Owing to blurred lesion boundaries, supervised segmentation methods usually suffer from annotation biases. To support unbiased lesion localisation and to minimise the labelling costs, we propose a data-driven framework supervised by image-level labels only. The framework can explicitly separate potential lesions from the original images with the help of a generative adversarial network and a lesion-specific decoder. Experiments on two COVID-19 datasets demonstrate the effectiveness of the proposed framework and its superior performance to several existing methods.

7.
Comput Methods Programs Biomed ; 218: 106731, 2022 May.
Article in English | MEDLINE | ID: covidwho-1719551

ABSTRACT

Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore surveys the numerous deep learning-based COVID-19 computed tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared to previous review articles on the topic, this study pigeon-holes the collected literature very differently (i.e., in a multi-level arrangement). For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively in real-time clinical practice by reusing sophisticated pretrained features rather than over-parameterizing standard models. Few-shot and self-supervised learning are the recent trends for addressing data scarcity and model efficacy. Deep learning (artificial intelligence)-based models are mainly utilized for disease management and control. This review should therefore help readers comprehend the landscape of deep learning approaches in ongoing COVID-19 CT diagnosis research.


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , COVID-19/diagnostic imaging , COVID-19 Testing , Humans , SARS-CoV-2 , Tomography, X-Ray Computed/methods
8.
Appl Soft Comput ; 116: 108291, 2022 Feb.
Article in English | MEDLINE | ID: covidwho-1568513

ABSTRACT

The world is currently experiencing an ongoing pandemic of an infectious disease named coronavirus disease 2019 (COVID-19), which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed Tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With a surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed to examine CT scans manually. An automated 3D CT scan recognition tool is therefore in high demand, since manual analysis is time-consuming for radiologists and their fatigue can cause possible misjudgment. However, due to the varying technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches. The multi-domain shift problem in multi-center, multi-scanner studies is therefore nontrivial; it is crucial for dependable recognition and critical for reproducible and objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, the coronavirus information fusion and diagnosis network (CIFD-Net), that can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Our model resolves the problem of varying appearance in CT scan images reliably and efficiently while attaining higher accuracy than other state-of-the-art methods.

9.
IEEE Access ; 8: 155987-156000, 2020.
Article in English | MEDLINE | ID: covidwho-1528284

ABSTRACT

Deep learning-based chest Computed Tomography (CT) analysis has been proven to be effective and efficient for COVID-19 diagnosis. Existing deep learning approaches rely heavily on large labeled data sets, which are difficult to acquire in this pandemic situation; weakly supervised approaches are therefore in demand. In this paper, we propose an end-to-end weakly supervised COVID-19 detection approach, ResNext+, that requires only volume-level data labels and can provide slice-level predictions. The proposed approach incorporates a lung segmentation mask as well as spatial and channel attention to extract spatial features. In addition, Long Short-Term Memory (LSTM) is utilized to capture the axial dependency of the slices. Moreover, a slice attention module is applied before the final fully connected layer to generate the slice-level prediction without additional supervision. An ablation study shows the efficiency of the attention blocks and the segmentation mask block. Experimental results obtained on publicly available datasets show a precision of 81.9% and an F1 score of 81.4%, against 76.7% precision and a 78.8% F1 score for the closest state of the art. The 5% improvement in precision and 3% in F1 score demonstrate the effectiveness of the proposed method. It is worth noting that applying image enhancement approaches does not improve the performance of the proposed method, and sometimes even harms the scores, although the enhanced images have better perceptual quality.
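The slice attention idea, aggregating per-slice features into one volume-level prediction while exposing per-slice weights, can be sketched as follows. This is a generic attention-pooling stand-in, not ResNext+'s actual module; the feature dimensions and the random attention vector are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def slice_attention(slice_feats, w):
    """Pool per-slice features into a volume embedding; the attention
    weights double as weak slice-level predictions."""
    scores = slice_feats @ w             # one attention logit per slice
    weights = softmax(scores)            # distribution over slices
    volume_feat = weights @ slice_feats  # attention-weighted volume embedding
    return volume_feat, weights

rng = np.random.default_rng(3)
feats = rng.normal(size=(40, 64))   # 40 CT slices, 64-dim features each
w = rng.normal(size=64)             # learned attention vector (hypothetical)
vol, attn = slice_attention(feats, w)
print(vol.shape, attn.shape)  # (64,) (40,)
```

Training only needs a volume-level label on `vol`'s downstream prediction, yet `attn` indicates which slices drove that prediction, which is how volume-level supervision yields slice-level output.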

10.
Pattern Recognit ; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: covidwho-1415697

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices that requires only scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistency techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, since the output of the mean teacher contains both correct and unreliable predictions, treating each teacher prediction equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the reliable predictions of the teacher. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation when a transform is applied to an input image of the network. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to that of fully supervised ones.
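Two building blocks of the uncertainty-aware mean teacher can be sketched directly: the exponential-moving-average teacher update and a per-pixel uncertainty mask. This is a minimal illustration under assumed settings (binary segmentation, entropy as the uncertainty measure, an arbitrary 0.2 threshold), not the paper's exact formulation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher weight update: the teacher is an exponential moving
    average of the student's weights."""
    return alpha * teacher_w + (1 - alpha) * student_w

def reliable_mask(teacher_probs, threshold=0.2):
    """Per-pixel predictive entropy as the uncertainty measure; only
    low-entropy (confident) teacher pixels guide the student."""
    p = np.clip(teacher_probs, 1e-7, 1 - 1e-7)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return entropy < threshold

rng = np.random.default_rng(4)
teacher_probs = rng.uniform(0, 1, size=(8, 8))  # toy teacher prediction map
mask = reliable_mask(teacher_probs)
# Pixels with probabilities near 0 or 1 survive; ambiguous ones are excluded,
# so the consistency loss is applied only where the teacher is confident.
print(mask.mean())
```

In training, the consistency loss between student and teacher predictions would be multiplied by this mask, so unreliable teacher pixels contribute nothing to the student's gradient.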
